4,725 research outputs found

    Wearable inertial sensors and range of motion metrics in physical therapy remote support

    Abstract. Physiotherapy practice diagnoses patient ailments that are often treated by daily repetition of prescribed physiotherapeutic exercise. The effectiveness of the exercise regime depends on regular daily repetition and on the correct execution of the prescribed exercises. Patients often have difficulty learning unfamiliar exercises and performing them with good technique. This design science research study examines a back squat classifier designed to appraise a patient's exercise regime away from the physiotherapy practice. The scope of the exercise appraisal is limited to one exercise, the back squat. Kinematic data captured with commercial inertial sensors is presented to a small group of physiotherapists to illustrate the potential of the technology to measure range of motion (ROM) for back squat appraisal. Opinions are considered from two fields of physiotherapy, general musculoskeletal and post-operative rehabilitation. While the exercise classifier is considered unsuitable for post-operative rehabilitation, the opinions expressed for use in general musculoskeletal physiotherapy are positive. Kinematic data captured with gyroscope sensors in the sagittal plane is analysed with MATLAB to develop a method for back squat exercise recognition and appraisal. The artefact, a back squat classifier with appraisal features, is constructed from MATLAB scripts that are shown to be effective with kinematic data from a novice athlete.
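    A minimal sketch (in Python rather than the study's MATLAB scripts) of how sagittal-plane range of motion might be estimated from a single gyroscope channel: the angular velocity is low-pass filtered, integrated to a relative angle, de-drifted, and the per-repetition ROM is read off around each flexion peak. The sampling rate, filter cut-off, and peak-detection thresholds are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.signal import butter, filtfilt, find_peaks

    def sagittal_rom(gyro_dps, fs=100.0):
        """Estimate per-repetition range of motion (degrees) from the sagittal-plane
        angular velocity (deg/s) of a single body-worn gyroscope (assumed setup)."""
        # Low-pass filter to suppress sensor noise before integration.
        b, a = butter(2, 5.0 / (fs / 2), btype="low")
        w = filtfilt(b, a, gyro_dps)
        # Integrate angular velocity to a relative joint angle, then remove linear
        # drift so the angle returns to baseline between repetitions.
        angle = np.cumsum(w) / fs
        angle -= np.linspace(angle[0], angle[-1], angle.size)
        # Treat each prominent flexion peak as one squat repetition and report the
        # peak-to-peak excursion in a one-second window either side of it.
        peaks, _ = find_peaks(np.abs(angle), prominence=20.0, distance=int(fs))
        return [float(np.ptp(angle[max(0, p - int(fs)):p + int(fs)])) for p in peaks]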

    Institutions and government growth: a comparison of the 1890s and the 1930s

    Statistics on the size and growth of the U.S. federal government, together with public statements by President Franklin Roosevelt, seem to indicate that the Great Depression was the primary event that caused the dramatic growth in government spending and intervention in the private sector that continues to the present day. Through a comparison of the economic conditions of the 1890s and the 1930s, the authors argue that post-1930 government growth in the United States is not the direct result of the Great Depression, but rather is a result of institutional, legal, and societal changes that began in the late 1800s. Thus, the Great Depression likely did trigger increases in government spending and regulatory involvement, but historical factors produced the conditions that tended to lend permanence to the growth of government that occurred during the Great Depression.

    Bayesian Spatial Binary Regression for Label Fusion in Structural Neuroimaging

    Many analyses of neuroimaging data involve studying one or more regions of interest (ROIs) in a brain image. In order to do so, each ROI must first be identified. Since every brain is unique, the location, size, and shape of each ROI varies across subjects. Thus, each ROI in a brain image must either be manually identified or (semi-) automatically delineated, a task referred to as segmentation. Automatic segmentation often involves mapping a previously manually segmented image to a new brain image and propagating the labels to obtain an estimate of where each ROI is located in the new image. A more recent approach to this problem is to propagate labels from multiple manually segmented atlases and combine the results using a process known as label fusion. To date, most label fusion algorithms either employ voting procedures or impose prior structure and subsequently find the maximum a posteriori estimator (i.e., the posterior mode) through optimization. We propose using a fully Bayesian spatial regression model for label fusion that facilitates direct incorporation of covariate information while making accessible the entire posterior distribution. We discuss the implementation of our model via Markov chain Monte Carlo and illustrate the procedure through both simulation and application to segmentation of the hippocampus, an anatomical structure known to be associated with Alzheimer's disease.
    Comment: 24 pages, 10 figures
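    A minimal sketch of the core idea, under assumed simulated data: rather than a majority vote or a MAP point estimate, a Bayesian regression that maps atlas votes (plus a covariate) to voxel membership is sampled with a random-walk Metropolis algorithm, so the whole posterior is available. The spatial prior used in the paper is omitted here.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: 500 voxels, five propagated atlas labels plus one covariate.
    n, n_atlas = 500, 5
    X = np.column_stack([rng.integers(0, 2, (n, n_atlas)), rng.normal(size=n)])
    X = np.column_stack([np.ones(n), X])                    # add an intercept
    beta_true = np.array([-2.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.5])
    y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))   # "true" voxel labels

    def log_post(beta):
        # Bernoulli (logistic) likelihood plus a weak N(0, 10^2) prior on beta.
        eta = X @ beta
        return np.sum(y * eta - np.log1p(np.exp(eta))) - 0.5 * np.sum(beta**2) / 100.0

    # Random-walk Metropolis over the regression coefficients.
    beta = np.zeros(X.shape[1])
    lp = log_post(beta)
    samples = []
    for it in range(6000):
        prop = beta + 0.05 * rng.normal(size=beta.size)
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            beta, lp = prop, lp_prop
        if it >= 1000:                                      # discard burn-in
            samples.append(beta.copy())

    post_mean = np.mean(samples, axis=0)   # posterior mean, not just a posterior mode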

    Self-immolative linkers in polymeric delivery systems

    There has been significant interest in the methodologies of controlled release for a diverse range of applications spanning drug delivery, biological and chemical sensors, and diagnostics. The advancement in novel substrate-polymer coupling moieties has led to the discovery of self-immolative linkers. This new class of linker has gained popularity in recent years in polymeric release technology as a result of the formation of a stable bond between protecting and leaving groups that becomes labile upon activation, leading to the rapid disassembly of the parent polymer. This ability has prompted numerous studies into the design and development of self-immolative linkers and the kinetics surrounding their disassembly. This review details the main concepts that underpin self-immolative linker technologies that feature in polymeric or dendritic conjugate systems and outlines the chemistries of amplified self-immolative elimination.

    Are the Health of the Nation's targets attainable? Postal survey of general practitioners' views

    The Health of the Nation's targets were introduced by the government in 1992 as part of a strategic approach to health.1 We aimed, in 1996, to elicit the views of general practitioners on the attainability of these targets.

    CATHEDRAL: A Fast and Effective Algorithm to Predict Folds and Domain Boundaries from Multidomain Protein Structures

    We present CATHEDRAL, an iterative protocol for determining the location of previously observed protein folds in novel multidomain protein structures. CATHEDRAL builds on the features of a fast secondary-structure–based method (using graph theory) to locate known folds within a multidomain context and a residue-based, double-dynamic programming algorithm, which is used to align members of the target fold groups against the query protein structure to identify the closest relative and assign domain boundaries. To increase the fidelity of the assignments, a support vector machine is used to provide an optimal scoring scheme. Once a domain is verified, it is excised, and the search protocol is repeated in an iterative fashion until all recognisable domains have been identified. We have performed an initial benchmark of CATHEDRAL against other publicly available structure comparison methods using a consensus dataset of domains derived from the CATH and SCOP domain classifications. CATHEDRAL shows superior performance in fold recognition and alignment accuracy when compared with many equivalent methods. If a novel multidomain structure contains a known fold, CATHEDRAL will locate it in 90% of cases, with <1% false positives. For nearly 80% of assigned domains in a manually validated test set, the boundaries were correctly delineated within a tolerance of ten residues. For the remaining cases, previously classified domains were very remotely related to the query chain so that embellishments to the core of the fold caused significant differences in domain sizes and manual refinement of the boundaries was necessary. To put this performance in context, a well-established sequence method based on hidden Markov models was only able to detect 65% of domains, with 33% of the subsequent boundaries assigned within ten residues. Since, on average, 50% of newly determined protein structures contain more than one domain unit, and typically 90% or more of these domains are already classified in CATH, CATHEDRAL will considerably facilitate the automation of protein structure classification.
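    A minimal sketch of the iterative "scan, assign, excise, repeat" control flow described above. The structure-comparison and SVM scoring steps are stand-in placeholder functions with hypothetical names, not the published CATHEDRAL implementation.

    from dataclasses import dataclass

    @dataclass
    class Hit:
        fold_id: str
        score: float   # e.g. an SVM decision value combining alignment features
        start: int     # assigned domain boundaries (residue indices) on the query
        end: int

    def scan_fold_library(residues, library):
        # Placeholder for the real scan: a secondary-structure graph match followed
        # by residue-level double dynamic programming and SVM-based scoring.
        return [Hit(fold_id, 0.0, residues[0], residues[-1]) for fold_id in library]

    def assign_domains(chain_length, library, threshold=0.0):
        remaining = list(range(chain_length))
        domains = []
        while remaining:
            hits = scan_fold_library(remaining, library)
            best = max(hits, key=lambda h: h.score, default=None)
            if best is None or best.score <= threshold:
                break                      # no recognisable fold left in the chain
            domains.append(best)
            # Excise the assigned domain and search the rest of the chain again.
            remaining = [r for r in remaining if not (best.start <= r <= best.end)]
        return domains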

    Making Open Access Viable Economically

    The Editors-in-Chief have decided that we will provide our much-cherished readers with an editorial every so often as a way of sharing insights from the “machine room” where so much of the thinking and work is done to publish the German Law Journal. We want to let you in on the ideas that are on our minds, share with you our observations, and include you in the conversations we are having that might be of interest to you. We begin this tradition with this issue, Volume 21 – Number 6. Andrew Hyde, a member of the editorial team with which the Journal has partnered at Cambridge University Press, as well as Russell A. Miller and Emanuel V. Towfigh, two of the Journal’s co-Editors-in-Chief, open our From the Headquarters Essay with a piece on the Journal’s experiences with and its further plans for making open-access (OA) publishing economically viable. Related to that theme, we also want to share news with you about the introduction of a voluntary article processing charge this fall. Finally, we want to draw your attention to a video and podcast service we will start to produce to accompany the scholarship published in the Journal as a way of promoting our authors’ work and expanding access to their ideas. If you are interested only in these latter initiatives, you can also read the short section in the GLJ Instructions for Authors.

    Assessment of the learning curve in health technologies: a systematic review

    Objective: We reviewed and appraised the methods by which the issue of the learning curve has been addressed during health technology assessment in the past. Method: We performed a systematic review of papers in clinical databases (BIOSIS, CINAHL, Cochrane Library, EMBASE, HealthSTAR, MEDLINE, Science Citation Index, and Social Science Citation Index) using the search term "learning curve". Results: The clinical search retrieved 4,571 abstracts for assessment, of which 559 (12%) published articles were eligible for review. Of these, 272 were judged to have formally assessed a learning curve. The procedures assessed were minimal access (51%), other surgical (41%), and diagnostic (8%). The majority of the studies were case series (95%). Some 47% of studies addressed only individual operator performance and 52% addressed institutional performance. The data were collected prospectively in 40%, retrospectively in 26%, and the method was unclear for 31%. The statistical methods used were simple graphs (44%), splitting the data chronologically and performing a t test or chi-squared test (60%), curve fitting (12%), and other model fitting (5%). Conclusions: Learning curves are rarely considered formally in health technology assessment. Where they are, the reporting of the studies and the statistical methods used are weak. As a minimum, reporting of learning should include the number and experience of the operators and a detailed description of data collection. Improved statistical methods would enhance the assessment of health technologies that require learning.
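    A minimal sketch in Python contrasting the two commonest analyses the review found: splitting cases chronologically and applying a t test versus fitting an explicit learning-curve model. The simulated operative times and the chosen curve form (exponential decay towards a plateau) are illustrative assumptions.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    case = np.arange(1, 101)
    # Simulated operative times (minutes): improvement towards a plateau plus noise.
    time_min = 60 + 80 * np.exp(-case / 20) + rng.normal(0, 8, case.size)

    # Approach 1: split the series chronologically and compare halves with a t test.
    t_stat, p_value = ttest_ind(time_min[:50], time_min[50:])

    # Approach 2: fit a learning curve y = plateau + gain * exp(-x / rate).
    def learning_curve(x, plateau, gain, rate):
        return plateau + gain * np.exp(-x / rate)

    params, _ = curve_fit(learning_curve, case, time_min, p0=(60.0, 80.0, 20.0))
    plateau, gain, rate = params   # 'rate' summarises how quickly performance stabilises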